This report describes the results of the first priming mindfulness study. It was initially built using Dominique Makowski’s Supplementary Materials template. Two similar templates will be created based on the two online pilot studies from the Varela project.
This is an exploratory (not confirmatory) study. The result of this exploration will be used to conduct a second, confirmatory, preregistered study.
Note also that these data were cleaned beforehand. Five datasets were merged through an inner join—3 Qualtrics surveys and 2 Inquisit tasks—so as to keep only participants who completed every step of the study. Missing data will be imputed later on. Duplicates were addressed with the rempsyc::best_duplicate function, which keeps the duplicate with the fewest missing values and, in case of ties, keeps the first occurrence.
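The same selection logic can be sketched in plain dplyr (a rough equivalent only; the column name participant_id is a hypothetical placeholder, and rempsyc::best_duplicate remains the method actually used here):

```r
library(dplyr)

# Keep, for each duplicated ID, the row with the fewest missing values;
# arrange() is stable, so ties keep their original order, and
# distinct() then retains the first occurrence
deduped <- data %>%
  mutate(n_missing = rowSums(is.na(.))) %>%
  arrange(participant_id, n_missing) %>%
  distinct(participant_id, .keep_all = TRUE) %>%
  select(-n_missing)
```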
library(rempsyc)
library(dplyr)
library(interactions)
library(performance)
library(see)
library(ggplot2)
library(report)
library(bestNormalize)
library(psych)
library(visdat)
library(missForest)
library(doParallel)
summary(report(sessionInfo()))
The analysis was done using the R Statistical language (v4.2.1; R Core Team, 2022) on Windows 10 x64, using the packages iterators (v1.0.14), doParallel (v1.0.17), interactions (v1.1.5), performance (v0.9.2.2), see (v0.7.3), report (v0.5.5.1), foreach (v1.5.2), bestNormalize (v1.8.3), psych (v2.2.5), missForest (v1.5), rempsyc (v0.0.9), visdat (v0.5.3), ggplot2 (v3.3.6) and dplyr (v1.0.9).
# Read data
data <- read.table("data/fulldataset.txt", sep = "\t", header = TRUE)
# Dummy-code group variable
data <- data %>%
mutate(condition_dum = ifelse(condition == "Mindfulness", 1, 0),
condition = as.factor(condition))
cat(report_participants(data, threshold = 1))
284 participants (Country: 94.01% USA, 1.41% Canada, 1.41% India, 1.41% missing, 1.76% other)
# Allocation ratio
report(data$condition)
x: 2 levels, namely Control (n = 143, 50.35%) and Mindfulness (n = 141, 49.65%)
At this stage, we define a list of our relevant variables.
# Make list of DVs
col.list <- c("blastintensity", "blastduration", "blastintensity.duration",
"blastintensity.first", "blastduration.first",
"blastintensity.duration.first", "all50trials",
"taylor_hugues", "KIMS",
"BSCS", "BAQ", "SOPT", "IAT")Why combine the intensity and duration scores? Should we? For a discussion, see:
Elson, M., Mohseni, M. R., Breuer, J., Scharkow, M., & Quandt, T. (2014). Press CRTT to measure aggressive behavior: the unstandardized use of the competitive reaction time task in aggression research. Psychological assessment, 26(2), 419. https://doi.org/10.1037/a0035569
Why use the first sound blast only instead of the average of all trials? Should we?
According to some, the Taylor Aggression Paradigm is not a measure of aggression per se, but of reactive aggression, because participants react to the other “participant’s” aggression. They suggest that for a pure measure of aggression, only the first sound blast delivered by the participant—before they receive one themselves—should be used. At this stage, we attempt the analyses with all these different measures of aggression for exploratory purposes. See the earlier reference to Elson et al. (2014):
We note that Lobbestael et al. (2021) suggest, based on factor analysis, separately averaging all preprovocation versus all postprovocation trials. However, this recommendation applies to the 30-trial version (we have the 25-trial version). They add:
Lobbestael, J., Emmerling, F., Brugman, S., Broers, N., Sack, A. T., Schuhmann, T., … & Arntz, A. (2021). Toward a more valid assessment of behavioral aggression: An open source platform and an empirically derived scoring method for using the Competitive Reaction Time Task (CRTT). Assessment, 28(4), 1065-1079. https://doi.org/10.1177/1073191120959757
Therefore, we assume it is safe to use and combine all 25 trials. However, we also note (again) the very high heterogeneity in quantification strategies:
See: https://www.flexiblemeasures.com/crtt/
Thus, the recommendations are:
See: https://www.flexiblemeasures.com/crtt/index.php?menu=recommendations
After discussion with Dr. David Chester, it was also suggested to use the average of all 50 standardized trials, as in Chester & Lasko (2019) and Lasko & Chester (2022).
Chester & Lasko (2019). Validating a standardized approach to the Taylor Aggression Paradigm. Social Psychological and Personality Science, 10(5), 620-631. https://doi.org/10.1177/1948550618775408
Lasko & Chester (2022). Measurement invariance and item response theory analysis of the taylor aggression paradigm. Assessment, 29(5), 981-992. https://doi.org/10.1177/1073191121996450
Still, he also notes,
In one of our papers, we show that the major approaches to scoring the CRTT typically arrive at the same result, so in the end, the scoring strategy you choose is unlikely to have a large effect on your findings.
That said, technically speaking, if a participant sets the intensity to 10 but the duration to 0, there is no actual sound blast for that trial. Similarly, if a participant sets the duration to 10 but the intensity to 0, there is no actual sound blast for that trial. Taking the product of intensity and duration accounts for this dynamic. In contrast, other methods, such as the sum of intensity and duration, or the average of all 50 trials (intensity and duration), do not. That said, perhaps it is not a big deal, given that this particular scenario (a 0 on one of the two settings) is probably rare.
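To make this concrete, here is a toy illustration (hypothetical trial settings, not our data) of how product-based and sum-based scores diverge when one of the two settings is zero:

```r
# Hypothetical single-trial settings on the 0-10 scale
intensity <- c(10, 5, 0)
duration  <- c(0, 5, 10)

# Product scoring: a zero on either setting yields 0 (no actual blast)
intensity * duration  # 0 25 0

# Sum scoring: the same trials still receive nonzero scores
intensity + duration  # 10 10 10
```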
Edit: Following another discussion, Dr. Chester pointed out that it is possible to measure how often this scenario of mismatching zero intensity occurs. Let’s test this right here.
# Blastintensity == 0
data %>%
filter(blastintensity == 0) %>%
summarize(percent = round(sum(blastduration != 0)/nrow(data) * 100, 2))
| percent |
|---|
| 0.35 |
# Blastduration == 0
data %>%
filter(blastduration == 0) %>%
summarize(percent = round(sum(blastintensity != 0)/nrow(data) * 100, 2))
| percent |
|---|
| 0 |
So we have 0.35% and 0% of the data in this scenario, respectively.
Dr. Chester also recommends a better (but more complex) approach:
If it’s possible, a superior approach to this aggregate scoring strategy is to employ multilevel modeling instead of a univariate analysis. When you aggregate across all 50 of the individual CRTT measures, you are losing a lot of information/variability/statistical-power. Multilevel modeling on the non-aggregated data allows you to retain this variability.
However, it is not yet clear whether this technique can be applied to our particular situation. Further study will be required.
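As a rough sketch of what such a multilevel analysis could look like (an illustration only; the `id` variable, the long-format reshaping, and the restriction to the intensity trials are assumptions, not an actual recommendation):

```r
library(dplyr)
library(tidyr)
library(lme4)

# One row per participant-trial instead of one aggregate score per participant
long_data <- data %>%
  mutate(id = row_number()) %>%
  pivot_longer(starts_with("taylor_values.blastintensity_"),
               names_to = "trial", values_to = "intensity")

# Random intercept per participant: retains trial-level variability
# that aggregate scoring would otherwise discard
fit <- lmer(intensity ~ condition + (1 | id), data = long_data)
summary(fit)
```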
Hugues Leduc suggested instead a two-step approach. First, calculate the product of volume and duration for each trial. Second, calculate the average of these 25 trial-level volume*duration composites. This should result in a score different from simply taking the product of the average of all duration trials and the average of all volume trials. We now add this method to the method comparison.
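A toy example (hypothetical values) shows that the two orders of operations indeed differ:

```r
# Three hypothetical trials
volume   <- c(10, 0, 5)
duration <- c(0, 10, 5)

# Two-step approach: average of the per-trial products
mean(volume * duration)        # (0 + 0 + 25) / 3 = 8.33

# One-step approach: product of the trial averages
mean(volume) * mean(duration)  # 5 * 5 = 25
```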
In this section, we prepare the data for analysis: (a) taking care of preliminary exclusions, (b) checking for and exploring missing values, (c) imputing missing data with missForest, (d) computing scale means, and (e) extracting reliability indices for our scales.
First, we only want to keep participants who agreed to maintain their participation in the study after the debriefing.
data %>%
filter(debrief.consent != 1 | is.na(debrief.consent)) %>%
nrow
## [1] 1
There is 1 person who would be excluded.
data <- data %>%
filter(debrief.consent != 2)
cat(report_participants(data, threshold = 1))
283 participants (Country: 93.99% USA, 1.41% Canada, 1.41% India, 1.41% missing, 1.77% other)
Second, we know that we only want to keep participants who had at
least an 80% success rate in the critical experimental manipulation
task. Let’s see how many participants have less than an 80% success
rate. Those with missing values for variable
manipsuccessleft will also be excluded since they have not
completed the critical experimental manipulation in this study.
data %>%
summarize(success.80 = sum(manipsuccessleft < .80, na.rm = TRUE),
is.na = sum(is.na(manipsuccessleft)))
| success.80 | is.na |
|---|---|
| 18 | 0 |
There are 18 people with a success rate below 80%; let’s exclude them.
data <- data %>%
filter(manipsuccessleft >= .80)
cat(report_participants(data, threshold = 1))
265 participants (Country: 95.09% USA, 1.51% missing, 1.13% Canada, 2.26% other)
# Check for missing values with nice_na
nice_na(data, scales = c("BSCS", "BAQ", "KIMS"))
| var | items | na | cells | na_percent | na_max | na_max_percent | all_na |
|---|---|---|---|---|---|---|---|
| BSCS_1:BSCS_7 | 7 | 0 | 1855 | 0.00 | 0 | 0.0 | 0 |
| BAQ_1:BAQ_12 | 12 | 0 | 3180 | 0.00 | 0 | 0.0 | 0 |
| KIMS_1:KIMS_39 | 39 | 0 | 10335 | 0.00 | 0 | 0.0 | 0 |
| Total | 154 | 6 | 40810 | 0.01 | 2 | 1.3 | 0 |
No missing data for our scales of interest, yeah!
Let’s check for patterns of missing data.
# Smaller subset of data for easier inspection
data %>%
select(manualworkerId:BAQ_12, IAT, SOPT,
condition, manipsuccessleft,
blastintensity.first:blastduration.first,
blastintensity, blastduration,
KIMS_1:KIMS_38) %>%
vis_miss
# Let's use Little's MCAR test to confirm
# We have to proceed by "scale" because the function can only
# support 30 variables max at a time
library(naniar)
data %>%
select(BSCS_1:BSCS_7) %>%
mcar_test
| statistic | df | p.value | missing.patterns |
|---|---|---|---|
| 0 | 0 | 0 | 1 |
data %>%
select(IAT_values.completed:IAT) %>%
mcar_test
| statistic | df | p.value | missing.patterns |
|---|---|---|---|
| 134.0322 | 2 | 0 | 2 |
# data %>%
# select(taylor_values.completed:taylor_expressions.meanrt_target) %>%
# mcar_test
data %>%
select(taylor_values.blastduration_9:taylor_values.blastintensity_9) %>%
mcar_test
| statistic | df | p.value | missing.patterns |
|---|---|---|---|
| 0 | 0 | 0 | 1 |
data %>%
select(taylor_values.blastintensity_18:taylor_values.blastintensity_225) %>%
mcar_test
| statistic | df | p.value | missing.patterns |
|---|---|---|---|
| 0 | 0 | 0 | 1 |
data %>%
select(KIMS_1:KIMS_39) %>%
mcar_test
| statistic | df | p.value | missing.patterns |
|---|---|---|---|
| 0 | 0 | 0 | 1 |
Here, we impute missing data with the missForest package, as it is one of the best available imputation methods. However, we have no missing data (except Country, because of a malfunction on Qualtrics’ side), so there is little to impute in this case.
missForest outperforms other imputation methods, including the popular MICE (multiple imputation by chained equations). You also don’t end up with several datasets, which makes subsequent analyses easier. Finally, it can be applied to mixed data types (missing values in both numeric and categorical variables).
Waljee, A. K., Mukherjee, A., Singal, A. G., Zhang, Y., Warren, J., Balis, U., … & Higgins, P. D. (2013). Comparison of imputation methods for missing laboratory data in medicine. BMJ Open, 3(8), e002847. https://doi.org/10.1136/bmjopen-2013-002847
Stekhoven, D. J., & Bühlmann, P. (2012). MissForest—non-parametric missing value imputation for mixed-type data. Bioinformatics, 28(1), 112-118. https://doi.org/10.1093/bioinformatics/btr597
# Need character variables as factors
# "Error: Can not handle categorical predictors with more than 53 categories."
# So we have to temporarily remove IDs also...
new.data <- data %>%
select(-c(manualworkerId, embeddedworkerId)) %>%
mutate(across(where(is.character), as.factor))
# Parallel processing
registerDoParallel(cores = 4)
# Variables
set.seed(100)
data.imp <- missForest(new.data, verbose = TRUE, parallelize = "variables")
## parallelizing over the variables of the input data matrix 'xmis'
## missForest iteration 1 in progress...done!
## estimated error(s): 0.00006369497 0.01724138
## difference(s): 0.000000000003150881 0
## time: 0.73 seconds
##
## missForest iteration 2 in progress...done!
## estimated error(s): 0.00006272885 0.01724138
## difference(s): 0.000000000002358314 0
## time: 0.39 seconds
##
## missForest iteration 3 in progress...done!
## estimated error(s): 0.00006508122 0.01724138
## difference(s): 0.0000000000008746581 0
## time: 0.36 seconds
##
## missForest iteration 4 in progress...done!
## estimated error(s): 0.00006410291 0.01724138
## difference(s): 0.0000000000002468853 0
## time: 0.39 seconds
##
## missForest iteration 5 in progress...done!
## estimated error(s): 0.0000650805 0.01724138
## difference(s): 0.00000000000006235316 0
## time: 0.38 seconds
##
## missForest iteration 6 in progress...done!
## estimated error(s): 0.00006344624 0.01724138
## difference(s): 0.0000000000004838527 0
## time: 0.34 seconds
# Total time is 2 sec (4*0.5) - 4 cores
# Extract imputed dataset
new.data <- data.imp$ximp
There are some variables we don’t actually want to impute, like country. We want to keep those NAs in that case. Let’s add them back. We also want to add the ID back.
# Add ID
new.data <- bind_cols(manualworkerId = data$manualworkerId, new.data)
# Add back the NAs in country
data <- new.data %>%
mutate(country = data$country)
# Reverse code items 2, 4, 6, 7
data <- data %>%
mutate(across(starts_with("BSCS"), .names = "{col}r"))
data <- data %>%
mutate(across(c(BSCS_2, BSCS_4, BSCS_6, BSCS_7), ~nice_reverse(.x, 5), .names = "{col}r"))
# Get mean BSCS
data <- data %>%
mutate(BSCS = rowMeans(select(., BSCS_1r:BSCS_7r)))
# Reverse code item 7
data <- data %>%
mutate(across(starts_with("BAQ"), .names = "{col}r"))
data <- data %>%
mutate(across(BAQ_7, ~nice_reverse(.x, 7), .names = "{col}r"))
# Get mean BAQ
data <- data %>%
mutate(BAQ = rowMeans(select(., BAQ_1r:BAQ_12r)))
# Reverse code items 3-4, 8, 11-12, 14, 16, 18, 20, 22, 23-24, 27-28, 31-32, 35-36
data <- data %>%
mutate(across(starts_with("KIMS"), .names = "{col}r"))
data <- data %>%
mutate(across(all_of(paste0("KIMS_", c(3:4, 8, 11:12, 14, 16, 18, 20,
22:24, 27:28, 31:32, 35:36))),
~nice_reverse(.x, 5), .names = "{col}r"))
# Get mean KIMS
data <- data %>%
mutate(KIMS = rowMeans(select(., KIMS_1r:KIMS_39r)))
# Create new variable blastintensity.duration
data <- data %>%
mutate(blastintensity.duration = blastintensity * blastduration,
blastintensity.duration.first =
blastintensity.first * blastduration.first,
.after = blastduration)
# Standardize and center all trials
data <- data %>%
mutate(across(taylor_values.blastduration_9:taylor_values.blastintensity_225,
~(scale(.x) %>% as.vector),
.names = "{col}_c"))
# Combine duration and intensity trials
data <- data %>%
mutate(all50trials = rowMeans(select(
., taylor_values.blastduration_9_c:taylor_values.blastintensity_225_c), na.rm = TRUE))
hist(data$all50trials)
# First make sure we get the calculations right
# By validating a manual confirmation of the
# average volume and duration calculations
data <- data %>%
mutate(blastintensity2 = rowMeans(
select(., taylor_values.blastintensity_9:taylor_values.blastintensity_225)),
blastduration2 = rowMeans(
select(., taylor_values.blastduration_9:taylor_values.blastduration_225)))
data$blastintensity2
## [1] 4.48 1.52 3.96 5.08 4.64 4.28 4.88 6.92 1.12 4.92 5.36 4.52
## [13] 4.28 4.28 4.56 7.16 3.16 5.80 4.96 8.96 3.64 4.24 3.48 5.36
## [25] 10.00 1.24 5.76 3.60 3.68 5.52 4.76 5.44 7.44 2.28 1.04 9.80
## [37] 6.32 2.76 6.16 5.00 3.52 6.04 3.80 7.96 3.96 5.60 4.64 6.72
## [49] 3.04 3.72 5.72 1.92 0.00 5.04 3.04 2.80 4.20 5.76 0.04 0.00
## [61] 5.88 4.56 6.48 5.16 9.40 4.76 4.44 5.56 4.60 5.40 3.96 4.88
## [73] 1.00 7.56 6.04 4.24 1.44 4.88 9.24 2.52 3.56 1.24 4.80 3.32
## [85] 9.44 4.52 0.00 7.80 5.04 2.84 5.44 1.00 6.48 2.76 2.20 9.08
## [97] 0.00 4.52 4.16 5.08 6.00 1.20 4.04 4.40 2.28 1.60 0.00 2.96
## [109] 3.76 6.60 0.52 5.56 6.00 2.96 0.00 5.72 1.00 6.88 3.28 7.20
## [121] 5.68 5.20 5.36 0.00 1.76 1.24 5.68 2.44 3.12 7.24 0.68 6.36
## [133] 5.64 8.28 5.96 10.00 3.72 9.96 3.44 5.80 5.36 2.32 2.56 5.48
## [145] 5.88 5.52 9.12 2.40 9.24 6.76 6.56 5.20 6.08 6.88 3.80 0.80
## [157] 10.00 5.28 0.12 2.36 6.00 6.68 7.72 9.44 10.00 5.68 7.84 3.36
## [169] 8.08 4.48 4.52 4.40 5.44 4.72 5.16 3.40 5.64 5.68 5.44 1.32
## [181] 2.60 5.60 1.36 10.00 4.96 1.36 4.32 0.00 9.40 5.68 1.60 4.20
## [193] 7.08 1.24 6.08 7.68 5.00 4.68 5.96 0.00 5.36 1.76 7.04 2.48
## [205] 4.40 4.08 0.92 7.48 5.20 4.68 2.76 6.08 2.36 6.56 3.40 1.64
## [217] 5.04 4.04 1.44 3.08 6.68 5.36 6.84 0.00 8.52 7.88 2.48 4.68
## [229] 4.12 2.40 7.00 0.00 8.72 3.80 6.52 4.32 0.96 6.04 2.88 1.76
## [241] 6.04 5.76 8.40 4.08 3.16 4.64 5.40 4.28 2.88 5.24 4.40 6.92
## [253] 6.56 3.56 0.00 0.16 8.40 4.08 2.12 6.08 4.72 4.80 4.00 7.44
## [265] 4.68
data$blastintensity2 == data$blastintensity
## [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [16] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [31] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [46] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [61] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [76] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [91] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [106] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [121] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [136] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [151] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [166] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [181] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [196] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [211] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [226] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [241] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [256] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
# Success! It is the same! :)
# Let's now check for volume
data$blastduration2
## [1] 4.00 1.32 3.04 5.40 5.64 4.12 3.60 5.76 5.32 5.40 5.48 4.40
## [13] 4.16 3.56 4.72 7.56 3.28 4.28 6.68 8.76 4.56 4.12 3.48 5.48
## [25] 10.00 1.32 4.68 2.96 3.68 4.04 5.32 5.96 6.68 2.60 1.40 10.00
## [37] 6.12 2.76 6.40 5.76 3.84 5.96 5.00 7.16 2.04 5.40 5.88 5.60
## [49] 2.76 4.48 5.36 0.64 0.00 4.44 3.04 2.32 4.00 5.56 0.04 0.00
## [61] 4.36 4.76 7.32 5.08 9.84 6.12 4.16 5.36 3.52 4.12 4.20 1.28
## [73] 1.00 7.20 3.36 4.20 1.52 4.80 8.48 2.88 2.56 1.28 4.64 2.16
## [85] 9.40 4.56 0.00 7.72 4.12 3.00 5.44 1.00 4.68 1.80 1.76 8.88
## [97] 0.00 4.96 3.88 4.64 6.04 8.00 3.96 1.84 1.00 1.64 0.00 2.88
## [109] 3.72 6.36 0.08 5.64 6.20 2.92 0.00 5.76 1.00 6.84 2.84 6.84
## [121] 5.72 4.28 5.52 0.16 1.88 1.12 5.40 2.48 1.04 5.28 0.68 5.48
## [133] 6.56 8.00 5.32 10.00 3.00 10.00 3.56 5.76 5.32 2.88 2.44 5.88
## [145] 6.60 5.40 9.20 3.52 8.96 3.80 5.88 5.20 2.80 6.72 2.08 0.76
## [157] 10.00 4.60 0.08 2.24 6.00 3.76 7.68 8.40 10.00 6.44 7.60 2.96
## [169] 3.72 5.96 5.00 3.40 5.80 4.72 4.80 3.20 5.28 5.32 5.04 1.24
## [181] 2.48 5.84 2.68 9.60 4.44 1.32 4.40 0.00 6.56 4.88 1.40 4.24
## [193] 7.32 1.24 4.96 8.36 4.44 4.64 3.80 0.00 4.72 2.52 6.92 1.00
## [205] 4.88 3.48 1.00 5.88 4.96 2.80 2.16 6.44 1.24 5.08 2.88 1.64
## [217] 4.92 7.08 0.72 2.96 6.52 5.48 7.20 0.00 9.00 8.52 2.48 4.32
## [229] 4.96 2.40 7.00 0.00 7.68 3.12 6.08 5.28 1.08 5.36 4.24 1.96
## [241] 5.84 5.76 8.24 4.76 4.08 4.32 4.96 2.36 3.52 4.72 3.68 7.72
## [253] 6.52 3.40 0.00 0.04 6.36 4.12 2.16 5.32 4.20 5.08 4.12 5.52
## [265] 4.68
data$blastduration2 == data$blastduration
## [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [13] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [25] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [37] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [49] FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
## [61] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [73] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [85] FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [97] TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE
## [109] FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE
## [121] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [133] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [145] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [157] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [169] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [181] FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE
## [193] FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE
## [205] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [217] FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE
## [229] FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [241] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [253] FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [265] FALSE
# Uh-oh, not the same! That's because blastduration is in ms,
# whereas the individual trials use the 0-10 scale!
# So there is no way to make them match, unfortunately...
# Let's try it this way anyway.
hist(data$blastduration2)
hist(data$blastduration)
nice_normality(data, "blastduration2", histogram = TRUE)
nice_normality(data, "blastduration", histogram = TRUE)
# Looks pretty similar but doesn't seem to be exactly the same...
# Wonder if it's an artifact caused by bins... Anyhow,
# probably a good method to calculate those.
# Create new mixed trials blastintensity.duration
taylor_list <- data %>%
select(taylor_values.blastduration_9:taylor_values.blastduration_225) %>%
names
num_list <- lapply(taylor_list, function(x) {
gsub("taylor_values.blastduration_", "", x)
})
for (i in num_list) {
data[paste0("taylor_values.mixed_", i)] <-
data[paste0("taylor_values.blastduration_", i)] *
data[paste0("taylor_values.blastintensity_", i)]
}
data$taylor_values.mixed_9
## [1] 20 5 0 72 54 27 24 80 20 20 80 100 56 72 15 6 1 50
## [19] 12 100 60 0 0 8 100 3 25 12 25 24 35 40 80 8 80 100
## [37] 100 49 0 100 100 63 56 100 6 90 90 54 6 56 36 20 0 28
## [55] 4 4 36 35 1 0 16 25 28 60 90 100 27 42 15 20 36 0
## [73] 4 100 35 25 6 1 100 3 36 0 100 5 54 49 0 15 40 64
## [91] 80 1 42 12 100 100 0 48 56 6 42 100 9 10 5 12 0 49
## [109] 48 24 20 15 49 4 0 56 1 100 7 100 16 80 100 0 0 1
## [127] 100 28 4 20 9 0 56 9 28 100 20 100 90 42 30 42 1 56
## [145] 81 100 56 32 30 50 70 100 28 63 4 1 100 15 2 20 0 35
## [163] 48 35 100 16 49 100 81 0 16 16 42 100 0 0 35 42 28 49
## [181] 0 4 6 0 70 1 49 0 20 8 20 42 100 1 16 63 36 20
## [199] 8 0 20 8 56 1 54 28 0 20 5 10 4 100 7 50 100 1
## [217] 49 36 4 1 100 100 100 0 100 90 49 100 0 0 49 0 100 7
## [235] 72 100 8 56 81 28 100 25 40 100 0 100 42 49 100 15 40 24
## [253] 45 2 0 4 49 60 1 28 35 35 1 20 100
nice_normality(data, variable = "taylor_values.mixed_9")
data$taylor_values.mixed_225
## [1] 20 1 100 16 70 0 56 10 8 90 9 36 4 50 28 100 16 0
## [19] 30 90 16 40 25 100 100 1 48 4 49 30 21 30 81 16 3 100
## [37] 24 1 9 16 9 49 12 100 60 36 100 36 9 6 100 0 0 20
## [55] 16 2 36 100 0 0 100 100 0 25 81 60 12 4 12 16 16 3
## [73] 1 100 1 16 6 72 100 12 20 0 49 9 81 81 0 63 20 2
## [91] 25 1 18 10 1 1 0 100 12 49 64 0 36 20 2 2 0 4
## [109] 6 100 0 64 100 9 0 90 1 100 28 18 100 64 100 0 8 1
## [127] 40 1 6 36 1 49 81 100 100 100 0 100 4 49 25 6 90 81
## [145] 1 49 100 9 100 0 24 16 5 40 4 1 100 16 0 0 100 100
## [163] 64 100 100 36 100 25 18 20 100 25 40 100 100 35 72 49 100 0
## [181] 40 40 4 100 24 4 8 0 80 36 1 25 100 4 1 64 16 9
## [199] 3 0 100 1 81 3 72 12 1 70 35 0 4 90 1 40 1 4
## [217] 36 28 0 24 35 36 42 0 30 81 25 16 20 9 49 0 81 9
## [235] 45 8 1 49 20 0 24 36 100 32 0 20 100 0 1 32 6 100
## [253] 49 16 0 0 64 16 16 36 4 3 28 80 16
nice_normality(data, variable = "taylor_values.mixed_225")
# Calculate average of all mixed trials! :)
# (mean() of a single trial column under rowwise() would just return
# that trial; rowMeans over all mixed_ columns averages the 25 trials)
data <- data %>%
mutate(taylor_hugues = rowMeans(select(
., starts_with("taylor_values.mixed_")), na.rm = TRUE))
nice_normality(data, variable = "taylor_hugues")
x <- data %>%
select(BSCS_1r:BSCS_7r) %>%
alpha
x$feldt
##
## 95% confidence boundaries (Feldt)
## lower alpha upper
## 0.8 0.84 0.86
x <- data %>%
select(BAQ_1r:BAQ_12r) %>%
alpha
x$feldt
##
## 95% confidence boundaries (Feldt)
## lower alpha upper
## 0.77 0.81 0.84
x <- data %>%
select(KIMS_1r:KIMS_39r) %>%
alpha
## Warning in alpha(.): Some items were negatively correlated with the total scale and probably
## should be reversed.
## To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( KIMS_19r ) were negatively correlated with the total scale and
## probably should be reversed.
## To do this, run the function again with the 'check.keys=TRUE' option
x$feldt
##
## 95% confidence boundaries (Feldt)
## lower alpha upper
## 0.87 0.89 0.91
We are getting the “Some items ( KIMS_17r KIMS_19r ) were negatively correlated with the total scale and probably should be reversed.” warning. Let’s check these items. In the raw data, item 17 is:
And item 19 is:
Which corresponds to items 17 and 19 as described in the original paper (Baer, Smith, & Allen, 2004). Theoretically, it also makes sense not to reverse-score them; they seem to relate to mindfulness, not its absence. So it is somewhat strange that psych::alpha is suggesting to reverse-score these items. Perhaps the scale is not meant to be used as a single factor, since it contains four factor subscales.
Note: before the imputation, this warning was worse in the sense that a lot more items were flagged in this way: “Some items ( KIMS_1r KIMS_5r KIMS_9r KIMS_13r KIMS_17r KIMS_19r KIMS_21r KIMS_25r KIMS_33r KIMS_37r KIMS_38r ) were negatively correlated with the total scale and probably should be reversed.”
In this section, we will: (a) test assumptions of normality, (b) transform variables violating assumptions, (c) test assumptions of homoscedasticity, (d) identify and winsorize outliers, and (e) conduct the t-tests.
lapply(col.list, function(x)
nice_normality(data,
variable = x,
title = x,
group = "condition",
shapiro = TRUE,
histogram = TRUE))
Several variables are clearly skewed. Let’s apply transformations. But first, let’s deal with the working memory task, SOPT (Self-Ordered Pointing Task). It is clearly problematic.
Let’s draw a proper histogram to see if it helps diagnose the problem with the SOPT.
hist(data$SOPT)
That looks weird; there are some obvious outliers here. They probably didn’t do the task correctly, especially since there’s a gap after 60 errors. Let’s see how many people made more than 60 errors.
data %>%
filter(SOPT > 60) %>%
count
| n |
|---|
| 11 |
There are 11 people with more than 60 errors. Let’s exclude them.
data <- data %>%
filter(SOPT <= 60)
cat(report_participants(data, threshold = 1))
254 participants (Country: 94.88% USA, 1.57% missing, 1.18% Canada, 2.36% other)
The function below transforms variables according to the best
possible transformation (via the bestNormalize package),
and also standardizes the variables.
predict_bestNormalize <- function(var) {
x <- bestNormalize(var, standardize = FALSE, allow_orderNorm = FALSE)
print(cur_column())
print(x$chosen_transform)
cat("\n")
predict(x)
}
set.seed(100)
data <- data %>%
mutate(across(all_of(col.list),
predict_bestNormalize,
.names = "{.col}.t"))
## [1] "blastintensity"
## I(x) Transformation with 254 nonmissing obs.
##
## [1] "blastduration"
## I(x) Transformation with 254 nonmissing obs.
##
## [1] "blastintensity.duration"
## Non-Standardized sqrt(x + a) Transformation with 254 nonmissing obs.:
## Relevant statistics:
## - a = 0
## - mean (before standardization) = 68.32964
## - sd (before standardization) = 32.73047
##
## [1] "blastintensity.first"
## I(x) Transformation with 254 nonmissing obs.
##
## [1] "blastduration.first"
## I(x) Transformation with 254 nonmissing obs.
##
## [1] "blastintensity.duration.first"
## Non-Standardized sqrt(x + a) Transformation with 254 nonmissing obs.:
## Relevant statistics:
## - a = 0
## - mean (before standardization) = 79.35357
## - sd (before standardization) = 45.43182
##
## [1] "all50trials"
## Non-Standardized asinh(x) Transformation with 254 nonmissing obs.:
## Relevant statistics:
## - mean (before standardization) = -0.001141875
## - sd (before standardization) = 0.6295642
##
## [1] "taylor_hugues"
## Non-Standardized sqrt(x + a) Transformation with 254 nonmissing obs.:
## Relevant statistics:
## - a = 0
## - mean (before standardization) = 5.24846
## - sd (before standardization) = 3.325689
##
## [1] "KIMS"
## Non-Standardized Log_b(x + a) Transformation with 254 nonmissing obs.:
## Relevant statistics:
## - a = 0
## - b = 10
## - mean (before standardization) = 0.5315526
## - sd (before standardization) = 0.06109914
##
## [1] "BSCS"
## I(x) Transformation with 254 nonmissing obs.
##
## [1] "BAQ"
## Non-Standardized Log_b(x + a) Transformation with 254 nonmissing obs.:
## Relevant statistics:
## - a = 0
## - b = 10
## - mean (before standardization) = 0.462038
## - sd (before standardization) = 0.1387411
##
## [1] "SOPT"
## Non-Standardized sqrt(x + a) Transformation with 254 nonmissing obs.:
## Relevant statistics:
## - a = 0
## - mean (before standardization) = 3.461596
## - sd (before standardization) = 1.12008
##
## [1] "IAT"
## I(x) Transformation with 254 nonmissing obs.
col.list <- paste0(col.list, ".t")
Note. The I(x) transformations above are not actually transformations, but a shorthand for passing the data “as is”, suggesting that none of the attempted transformations improved normality in those cases, so no transformation is applied. This only appears when standardize is set to FALSE. When it is set to TRUE, those variables instead show center_scale(x), indicating that the data are merely centered and scaled, not otherwise transformed.
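As a quick sanity check of this behavior (a toy sketch on simulated data; with already-normal data, bestNormalize will typically select the identity transformation):

```r
library(bestNormalize)

set.seed(123)
x <- rnorm(100)  # data that is already approximately normal

# With standardize = FALSE, I(x) is typically chosen when no candidate
# transformation improves normality
fit <- bestNormalize(x, standardize = FALSE, allow_orderNorm = FALSE)
fit$chosen_transform
```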
Let’s check if normality was corrected.
# Group normality
lapply(col.list, function(x)
nice_normality(data,
x,
"condition",
shapiro = TRUE,
title = x,
histogram = TRUE))
Looks rather reasonable now, though not perfect (fortunately t-tests are quite robust against violations of normality).
We can now resume with the next step: checking variance.
# Plotting variance
plots(lapply(col.list, function(x) {
nice_varplot(data, x, group = "condition")
}),
n_columns = 3)
Variance looks good. No group has four times the variance of any other group. We can now resume with checking outliers.
# Using boxplots
plots(lapply(col.list, function(x) {
ggplot(data, aes(condition, !!sym(x))) +
geom_boxplot()
}),
n_columns = 3)
There are some outliers, but nothing unreasonable. Let’s still check with the 3 median absolute deviations (MAD) method.
find_mad(data, col.list, criteria = 3)
## $SOPT.t
## # A tibble: 7 × 2
## Row[,1] SOPT.t_mad
## <int> <dbl>
## 1 38 3.18
## 2 56 -3.33
## 3 61 3.02
## 4 124 3.77
## 5 144 3.85
## 6 177 -3.33
## 7 215 3.63
##
## $IAT.t
## # A tibble: 1 × 2
## Row[,1] IAT.t_mad
## <int> <dbl>
## 1 242 3.03
##
## attr(,"class")
## [1] "find_mad"
## attr(,"outlier_list")
## attr(,"outlier_list")$SOPT.t
## # A tibble: 7 × 2
## Row[,1] SOPT.t_mad
## <int> <dbl>
## 1 38 3.18
## 2 56 -3.33
## 3 61 3.02
## 4 124 3.77
## 5 144 3.85
## 6 177 -3.33
## 7 215 3.63
##
## attr(,"outlier_list")$IAT.t
## # A tibble: 1 × 2
## Row[,1] IAT.t_mad
## <int> <dbl>
## 1 242 3.03
##
## attr(,"outlier_total")
## # A tibble: 8 × 14
## Row[,1] blastintensi…¹ blast…² blast…³ blast…⁴ blast…⁵ blast…⁶ all50…⁷ taylo…⁸
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 38 0.141 0.563 0.279 0.674 1.12 1.16 0.336 1.19
## 2 56 -2.18 -2.81 -2.46 -1.35 -0.904 -1.23 -1.83 -1.08
## 3 61 0.215 0.271 0.201 -0.225 1.12 0.518 0.212 0.618
## 4 124 0.440 0.917 0.600 0.225 0.445 0.518 0.640 0.552
## 5 144 1.02 0.992 0.957 0.450 0.445 0.651 0.932 0.667
## 6 177 -2.20 -2.87 -2.49 -1.57 -1.58 -1.67 -1.84 -1.33
## 7 215 1.49 1.79 1.56 0.450 1.12 1.01 1.43 1.06
## 8 242 0.871 0.898 0.835 0.450 0 0.379 0.812 0.357
## # … with 5 more variables: KIMS.t_mad <dbl>, BSCS.t_mad <dbl>, BAQ.t_mad <dbl>,
## # SOPT.t_mad <dbl>, IAT.t_mad <dbl>, and abbreviated variable names
## # ¹blastintensity.t_mad, ²blastduration.t_mad,
## # ³blastintensity.duration.t_mad, ⁴blastintensity.first.t_mad,
## # ⁵blastduration.first.t_mad, ⁶blastintensity.duration.first.t_mad,
## # ⁷all50trials.t_mad, ⁸taylor_hugues.t_mad
## attr(,"outlier_multiple")
## # A tibble: 0 × 2
## # … with 2 variables: Row <int[,1]>, n <int>
## attr(,"criteria")
## [1] 3
## attr(,"col.list")
## [1] "blastintensity.t" "blastduration.t"
## [3] "blastintensity.duration.t" "blastintensity.first.t"
## [5] "blastduration.first.t" "blastintensity.duration.first.t"
## [7] "all50trials.t" "taylor_hugues.t"
## [9] "KIMS.t" "BSCS.t"
## [11] "BAQ.t" "SOPT.t"
## [13] "IAT.t"
There are 8 outlying observations after our transformations (7 on the SOPT and 1 on the IAT).
Visual assessment and the MAD method confirm some outlying values. We could ignore them, but because they could have a disproportionate influence on the models, one recommendation is to winsorize them, that is, to bring them back to the 3-deviation limit. Instead of using the standard deviation around the mean, however, we use the absolute deviation around the median, as it is more robust to extreme observations. For a discussion, see:
Leys, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764–766. https://doi.org/10.1016/j.jesp.2013.03.013
# Winsorize variables of interest with MAD
data <- data %>%
mutate(across(all_of(col.list),
winsorize_mad,
.names = "{.col}.w"))
# Update col.list
col.list <- paste0(col.list, ".w")
Outliers are still present but have been brought back within reasonable limits, where applicable.
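Conceptually, MAD-based winsorization (as performed by `winsorize_mad` above) caps extreme values at the boundary rather than removing them. A minimal base-R sketch of this logic, on made-up data:

```r
# Sketch of MAD-based winsorization: values beyond median +/- 3 * MAD are
# capped at that boundary instead of being excluded.
winsorize_mad_sketch <- function(x, criteria = 3) {
  med <- median(x, na.rm = TRUE)
  half_width <- criteria * mad(x, na.rm = TRUE)
  pmin(pmax(x, med - half_width), med + half_width)
}

x <- c(1:10, 100)       # 100 is an extreme value
winsorize_mad_sketch(x) # 100 is brought back to the upper boundary
```

This preserves the sample size and the rank order of observations while limiting the leverage of extreme points.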
We can now standardize our variables.
data <- data %>%
mutate(across(all_of(col.list),
function(x) {
as.numeric(scale(x))
},
.names = "{.col}.s"))
# Update col.list
col.list <- paste0(col.list, ".s")
We are now ready to compare the group condition (Control vs. Mindfulness Priming) across our different variables with t-tests.
nice_t_test(data,
response = col.list,
group = "condition") %>%
nice_table(highlight = 0.10, width = .80)
## Using Welch t-test (base R's default; cf. https://doi.org/10.5334/irsp.82).
## For the Student t-test, use `var.equal = TRUE`.
Dependent Variable | t | df | p | d | 95% CI |
blastintensity.t.w.s | -1.42 | 246.17 | .157 | -0.18 | [-0.42, 0.07] |
blastduration.t.w.s | -0.45 | 248.55 | .654 | -0.06 | [-0.30, 0.19] |
blastintensity.duration.t.w.s | -1.03 | 247.00 | .303 | -0.13 | [-0.38, 0.12] |
blastintensity.first.t.w.s | 0.14 | 251.33 | .889 | 0.02 | [-0.23, 0.26] |
blastduration.first.t.w.s | 0.23 | 251.71 | .822 | 0.03 | [-0.22, 0.27] |
blastintensity.duration.first.t.w.s | 0.18 | 251.07 | .858 | 0.02 | [-0.22, 0.27] |
all50trials.t.w.s | -0.96 | 248.03 | .340 | -0.12 | [-0.37, 0.13] |
taylor_hugues.t.w.s | 0.15 | 251.30 | .883 | 0.02 | [-0.23, 0.26] |
KIMS.t.w.s | -1.03 | 249.34 | .306 | -0.13 | [-0.38, 0.12] |
BSCS.t.w.s | -0.77 | 250.26 | .442 | -0.10 | [-0.34, 0.15] |
BAQ.t.w.s | 1.49 | 251.09 | .138 | 0.19 | [-0.06, 0.43] |
SOPT.t.w.s | 0.12 | 248.85 | .904 | 0.02 | [-0.23, 0.26] |
IAT.t.w.s | 1.55 | 251.65 | .122 | 0.19 | [-0.05, 0.44] |
Interpretation: There is no clear group effect of our experimental condition on our different variables. The previously marginal effect on blastintensity seems to have shifted to the BAQ and IAT, but we know the experimental manipulation could not have influenced these tasks, as they occurred chronologically before it. We still examine blastintensity visually, for theoretical reasons and to help choose the best measure.
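As the note in the t-test output above mentions, `nice_t_test` relies on base R’s `t.test`, whose default is the Welch test. A minimal base-R illustration on simulated data (sample sizes chosen to mirror the two conditions):

```r
# Welch (var.equal = FALSE, R's default) uses Welch-Satterthwaite-adjusted,
# fractional degrees of freedom; Student assumes equal variances.
set.seed(42)
g1 <- rnorm(143, mean = 0,   sd = 1.0)
g2 <- rnorm(141, mean = 0.1, sd = 1.3)
welch   <- t.test(g1, g2)                    # Welch (default)
student <- t.test(g1, g2, var.equal = TRUE)  # Student
c(welch = unname(welch$parameter), student = unname(student$parameter))
# Welch df are fractional and smaller than the Student df (n1 + n2 - 2 = 282)
```

This is why the df column in the table above shows non-integer values around 246 to 252 rather than a constant 282.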
nice_violin(data,
group = "condition",
response = "blastintensity.t.w.s",
comp1 = 1,
comp2 = 2,
obs = TRUE,
has.d = TRUE,
d.y = 1)
nice_violin(data,
group = "condition",
response = "blastduration.t.w.s",
comp1 = 1,
comp2 = 2,
obs = TRUE,
has.d = TRUE,
d.y = 1)
nice_violin(data,
group = "condition",
response = "blastintensity.duration.t.w.s",
comp1 = 1,
comp2 = 2,
obs = TRUE,
has.d = TRUE,
d.y = 1)
nice_violin(data,
group = "condition",
response = "blastintensity.first.t.w.s",
comp1 = 1,
comp2 = 2,
obs = TRUE,
has.d = TRUE,
d.y = -0.8)
nice_violin(data,
group = "condition",
response = "blastduration.first.t.w.s",
comp1 = 1,
comp2 = 2,
obs = TRUE,
has.d = TRUE,
d.y = -1.5)
nice_violin(data,
group = "condition",
response = "blastintensity.duration.first.t.w.s",
comp1 = 1,
comp2 = 2,
obs = TRUE,
has.d = TRUE,
d.y = -1.2)
Let’s extract the means and standard deviations for journal reporting.
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity),
SD = sd(blastintensity),
N = n()) %>%
nice_table(width = 0.40)
condition | M | SD | N |
Control | 4.39 | 2.30 | 130 |
Mindfulness | 4.82 | 2.56 | 124 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastduration),
SD = sd(blastduration),
N = n()) %>%
nice_table(width = 0.40)
condition | M | SD | N |
Control | 1,018.10 | 436.17 | 130 |
Mindfulness | 1,043.62 | 468.08 | 124 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity.duration),
SD = sd(blastintensity.duration),
N = n()) %>%
nice_table(width = 0.40)
condition | M | SD | N |
Control | 5,353.38 | 4,205.28 | 130 |
Mindfulness | 6,137.15 | 4,996.60 | 124 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity.first),
SD = sd(blastintensity.first),
N = n()) %>%
nice_table(width = 0.40)
condition | M | SD | N |
Control | 5.92 | 3.44 | 130 |
Mindfulness | 5.85 | 3.45 | 124 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastduration.first),
SD = sd(blastduration.first),
N = n()) %>%
nice_table(width = 0.40)
condition | M | SD | N |
Control | 1,155.38 | 637.53 | 130 |
Mindfulness | 1,137.50 | 629.04 | 124 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity.duration.first),
SD = sd(blastintensity.duration.first),
N = n()) %>%
nice_table(width = 0.40)
condition | M | SD | N |
Control | 8,405.31 | 6,964.25 | 130 |
Mindfulness | 8,297.98 | 6,886.46 | 124 |
Let’s now examine whether our variables interact with the experimental condition. But first, let’s test the models’ assumptions.
big.mod1 <- lm(blastintensity.t.w.s ~ condition_dum*KIMS.t.w.s +
condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s,
data = data, na.action="na.exclude")
check_model(big.mod1)
big.mod2 <- lm(blastduration.t.w.s ~ condition_dum*KIMS.t.w.s +
condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s,
data = data, na.action="na.exclude")
check_model(big.mod2)
big.mod3 <- lm(blastintensity.duration.t.w.s ~ condition_dum*KIMS.t.w.s +
condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s,
data = data, na.action="na.exclude")
check_model(big.mod3)
big.mod4 <- lm(blastintensity.first.t.w.s ~ condition_dum*KIMS.t.w.s +
condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s,
data = data, na.action="na.exclude")
check_model(big.mod4)
big.mod5 <- lm(blastduration.first.t.w.s ~ condition_dum*KIMS.t.w.s +
condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s,
data = data, na.action="na.exclude")
check_model(big.mod5)
big.mod6 <- lm(blastintensity.duration.first.t.w.s ~ condition_dum*KIMS.t.w.s +
condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s,
data = data, na.action="na.exclude")
check_model(big.mod6)
All model assumptions look quite good overall, even with this many variables. The linearity and homoscedasticity lines are slightly skewed, but nothing problematic. Let’s now look at the results.
big.mod1 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
blastintensity.t.w.s | condition_dum | 242 | 0.21 | 1.79 | .075 | 0.01 |
blastintensity.t.w.s | KIMS.t.w.s | 242 | 0.13 | 1.31 | .193 | 0.01 |
blastintensity.t.w.s | BSCS.t.w.s | 242 | -0.14 | -1.43 | .154 | 0.01 |
blastintensity.t.w.s | BAQ.t.w.s | 242 | 0.11 | 1.07 | .286 | 0.00 |
blastintensity.t.w.s | SOPT.t.w.s | 242 | 0.21 | 2.37 | .019 | 0.02 |
blastintensity.t.w.s | IAT.t.w.s | 242 | 0.16 | 1.88 | .061 | 0.01 |
blastintensity.t.w.s | condition_dum:KIMS.t.w.s | 242 | -0.24 | -1.73 | .085 | 0.01 |
blastintensity.t.w.s | condition_dum:BSCS.t.w.s | 242 | 0.45 | 3.03 | .003 | 0.03 |
blastintensity.t.w.s | condition_dum:BAQ.t.w.s | 242 | 0.08 | 0.55 | .584 | 0.00 |
blastintensity.t.w.s | condition_dum:SOPT.t.w.s | 242 | 0.07 | 0.59 | .555 | 0.00 |
blastintensity.t.w.s | condition_dum:IAT.t.w.s | 242 | -0.17 | -1.38 | .168 | 0.01 |
big.mod2 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
blastduration.t.w.s | condition_dum | 242 | 0.10 | 0.83 | .408 | 0.00 |
blastduration.t.w.s | KIMS.t.w.s | 242 | 0.12 | 1.17 | .242 | 0.00 |
blastduration.t.w.s | BSCS.t.w.s | 242 | -0.14 | -1.42 | .156 | 0.01 |
blastduration.t.w.s | BAQ.t.w.s | 242 | 0.04 | 0.44 | .664 | 0.00 |
blastduration.t.w.s | SOPT.t.w.s | 242 | 0.28 | 3.11 | .002 | 0.03 |
blastduration.t.w.s | IAT.t.w.s | 242 | 0.22 | 2.57 | .011 | 0.02 |
blastduration.t.w.s | condition_dum:KIMS.t.w.s | 242 | -0.25 | -1.78 | .077 | 0.01 |
blastduration.t.w.s | condition_dum:BSCS.t.w.s | 242 | 0.50 | 3.41 | .001 | 0.04 |
blastduration.t.w.s | condition_dum:BAQ.t.w.s | 242 | 0.14 | 1.04 | .301 | 0.00 |
blastduration.t.w.s | condition_dum:SOPT.t.w.s | 242 | 0.05 | 0.41 | .685 | 0.00 |
blastduration.t.w.s | condition_dum:IAT.t.w.s | 242 | -0.17 | -1.40 | .162 | 0.01 |
big.mod3 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
blastintensity.duration.t.w.s | condition_dum | 242 | 0.17 | 1.43 | .154 | 0.01 |
blastintensity.duration.t.w.s | KIMS.t.w.s | 242 | 0.13 | 1.32 | .187 | 0.01 |
blastintensity.duration.t.w.s | BSCS.t.w.s | 242 | -0.14 | -1.47 | .144 | 0.01 |
blastintensity.duration.t.w.s | BAQ.t.w.s | 242 | 0.09 | 0.87 | .384 | 0.00 |
blastintensity.duration.t.w.s | SOPT.t.w.s | 242 | 0.25 | 2.80 | .005 | 0.03 |
blastintensity.duration.t.w.s | IAT.t.w.s | 242 | 0.19 | 2.19 | .030 | 0.02 |
blastintensity.duration.t.w.s | condition_dum:KIMS.t.w.s | 242 | -0.25 | -1.82 | .070 | 0.01 |
blastintensity.duration.t.w.s | condition_dum:BSCS.t.w.s | 242 | 0.49 | 3.30 | .001 | 0.04 |
blastintensity.duration.t.w.s | condition_dum:BAQ.t.w.s | 242 | 0.11 | 0.78 | .437 | 0.00 |
blastintensity.duration.t.w.s | condition_dum:SOPT.t.w.s | 242 | 0.06 | 0.50 | .620 | 0.00 |
blastintensity.duration.t.w.s | condition_dum:IAT.t.w.s | 242 | -0.17 | -1.38 | .168 | 0.01 |
big.mod4 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
blastintensity.first.t.w.s | condition_dum | 242 | 0.02 | 0.14 | .888 | 0.00 |
blastintensity.first.t.w.s | KIMS.t.w.s | 242 | -0.05 | -0.50 | .621 | 0.00 |
blastintensity.first.t.w.s | BSCS.t.w.s | 242 | -0.07 | -0.68 | .500 | 0.00 |
blastintensity.first.t.w.s | BAQ.t.w.s | 242 | 0.11 | 1.08 | .282 | 0.00 |
blastintensity.first.t.w.s | SOPT.t.w.s | 242 | 0.19 | 2.00 | .047 | 0.01 |
blastintensity.first.t.w.s | IAT.t.w.s | 242 | 0.15 | 1.72 | .086 | 0.01 |
blastintensity.first.t.w.s | condition_dum:KIMS.t.w.s | 242 | 0.00 | 0.03 | .973 | 0.00 |
blastintensity.first.t.w.s | condition_dum:BSCS.t.w.s | 242 | 0.32 | 2.11 | .036 | 0.02 |
blastintensity.first.t.w.s | condition_dum:BAQ.t.w.s | 242 | -0.05 | -0.35 | .728 | 0.00 |
blastintensity.first.t.w.s | condition_dum:SOPT.t.w.s | 242 | 0.01 | 0.10 | .917 | 0.00 |
blastintensity.first.t.w.s | condition_dum:IAT.t.w.s | 242 | -0.10 | -0.81 | .420 | 0.00 |
big.mod5 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
blastduration.first.t.w.s | condition_dum | 242 | 0.00 | 0.01 | .989 | 0.00 |
blastduration.first.t.w.s | KIMS.t.w.s | 242 | -0.09 | -0.83 | .405 | 0.00 |
blastduration.first.t.w.s | BSCS.t.w.s | 242 | -0.03 | -0.29 | .775 | 0.00 |
blastduration.first.t.w.s | BAQ.t.w.s | 242 | 0.02 | 0.16 | .873 | 0.00 |
blastduration.first.t.w.s | SOPT.t.w.s | 242 | 0.14 | 1.53 | .127 | 0.01 |
blastduration.first.t.w.s | IAT.t.w.s | 242 | 0.12 | 1.35 | .180 | 0.01 |
blastduration.first.t.w.s | condition_dum:KIMS.t.w.s | 242 | -0.04 | -0.26 | .794 | 0.00 |
blastduration.first.t.w.s | condition_dum:BSCS.t.w.s | 242 | 0.34 | 2.22 | .027 | 0.02 |
blastduration.first.t.w.s | condition_dum:BAQ.t.w.s | 242 | 0.15 | 1.03 | .303 | 0.00 |
blastduration.first.t.w.s | condition_dum:SOPT.t.w.s | 242 | 0.08 | 0.64 | .523 | 0.00 |
blastduration.first.t.w.s | condition_dum:IAT.t.w.s | 242 | -0.13 | -1.02 | .308 | 0.00 |
big.mod6 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
blastintensity.duration.first.t.w.s | condition_dum | 242 | 0.01 | 0.09 | .925 | 0.00 |
blastintensity.duration.first.t.w.s | KIMS.t.w.s | 242 | -0.09 | -0.84 | .402 | 0.00 |
blastintensity.duration.first.t.w.s | BSCS.t.w.s | 242 | -0.05 | -0.47 | .639 | 0.00 |
blastintensity.duration.first.t.w.s | BAQ.t.w.s | 242 | 0.08 | 0.78 | .435 | 0.00 |
blastintensity.duration.first.t.w.s | SOPT.t.w.s | 242 | 0.17 | 1.85 | .066 | 0.01 |
blastintensity.duration.first.t.w.s | IAT.t.w.s | 242 | 0.12 | 1.35 | .180 | 0.01 |
blastintensity.duration.first.t.w.s | condition_dum:KIMS.t.w.s | 242 | -0.01 | -0.09 | .929 | 0.00 |
blastintensity.duration.first.t.w.s | condition_dum:BSCS.t.w.s | 242 | 0.37 | 2.42 | .016 | 0.02 |
blastintensity.duration.first.t.w.s | condition_dum:BAQ.t.w.s | 242 | 0.06 | 0.43 | .671 | 0.00 |
blastintensity.duration.first.t.w.s | condition_dum:SOPT.t.w.s | 242 | 0.04 | 0.30 | .768 | 0.00 |
blastintensity.duration.first.t.w.s | condition_dum:IAT.t.w.s | 242 | -0.11 | -0.86 | .390 | 0.00 |
# Note: big.mod7 was not defined above; defining it here with the same
# predictors as models 1 to 6, with all50trials as the outcome.
big.mod7 <- lm(all50trials.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s,
               data = data, na.action = "na.exclude")
big.mod7 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
all50trials.t.w.s | condition_dum | 242 | 0.16 | 1.36 | .174 | 0.01 |
all50trials.t.w.s | KIMS.t.w.s | 242 | 0.11 | 1.15 | .253 | 0.00 |
all50trials.t.w.s | BSCS.t.w.s | 242 | -0.13 | -1.33 | .186 | 0.01 |
all50trials.t.w.s | BAQ.t.w.s | 242 | 0.08 | 0.84 | .401 | 0.00 |
all50trials.t.w.s | SOPT.t.w.s | 242 | 0.25 | 2.73 | .007 | 0.03 |
all50trials.t.w.s | IAT.t.w.s | 242 | 0.20 | 2.35 | .020 | 0.02 |
all50trials.t.w.s | condition_dum:KIMS.t.w.s | 242 | -0.23 | -1.64 | .103 | 0.01 |
all50trials.t.w.s | condition_dum:BSCS.t.w.s | 242 | 0.46 | 3.07 | .002 | 0.03 |
all50trials.t.w.s | condition_dum:BAQ.t.w.s | 242 | 0.11 | 0.80 | .424 | 0.00 |
all50trials.t.w.s | condition_dum:SOPT.t.w.s | 242 | 0.06 | 0.46 | .643 | 0.00 |
all50trials.t.w.s | condition_dum:IAT.t.w.s | 242 | -0.18 | -1.47 | .143 | 0.01 |
# Note: big.mod8 was not defined above; defining it here with the same
# predictors as models 1 to 6, with taylor_hugues as the outcome.
big.mod8 <- lm(taylor_hugues.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s,
               data = data, na.action = "na.exclude")
big.mod8 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
taylor_hugues.t.w.s | condition_dum | 242 | 0.02 | 0.12 | .901 | 0.00 |
taylor_hugues.t.w.s | KIMS.t.w.s | 242 | -0.09 | -0.86 | .389 | 0.00 |
taylor_hugues.t.w.s | BSCS.t.w.s | 242 | -0.04 | -0.41 | .681 | 0.00 |
taylor_hugues.t.w.s | BAQ.t.w.s | 242 | 0.07 | 0.72 | .475 | 0.00 |
taylor_hugues.t.w.s | SOPT.t.w.s | 242 | 0.18 | 1.88 | .061 | 0.01 |
taylor_hugues.t.w.s | IAT.t.w.s | 242 | 0.11 | 1.27 | .205 | 0.01 |
taylor_hugues.t.w.s | condition_dum:KIMS.t.w.s | 242 | -0.01 | -0.05 | .956 | 0.00 |
taylor_hugues.t.w.s | condition_dum:BSCS.t.w.s | 242 | 0.36 | 2.35 | .020 | 0.02 |
taylor_hugues.t.w.s | condition_dum:BAQ.t.w.s | 242 | 0.07 | 0.49 | .622 | 0.00 |
taylor_hugues.t.w.s | condition_dum:SOPT.t.w.s | 242 | 0.04 | 0.30 | .767 | 0.00 |
taylor_hugues.t.w.s | condition_dum:IAT.t.w.s | 242 | -0.10 | -0.77 | .440 | 0.00 |
big.mod9 <- lm(blastintensity.duration.t.w.s ~ condition_dum*KIMS.t.w.s +
condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s +
blastintensity.t.w.s + blastduration.t.w.s,
data = data, na.action="na.exclude")
check_model(big.mod9)
big.mod9 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
blastintensity.duration.t.w.s | condition_dum | 240 | 0.00 | 0.05 | .961 | 0.00 |
blastintensity.duration.t.w.s | KIMS.t.w.s | 240 | 0.00 | 0.91 | .364 | 0.00 |
blastintensity.duration.t.w.s | BSCS.t.w.s | 240 | -0.00 | -0.10 | .917 | 0.00 |
blastintensity.duration.t.w.s | BAQ.t.w.s | 240 | 0.00 | 0.94 | .347 | 0.00 |
blastintensity.duration.t.w.s | SOPT.t.w.s | 240 | 0.01 | 1.39 | .167 | 0.00 |
blastintensity.duration.t.w.s | IAT.t.w.s | 240 | -0.00 | -0.57 | .568 | 0.00 |
blastintensity.duration.t.w.s | blastintensity.t.w.s | 240 | 0.60 | 87.28 | < .001 | 0.06 |
blastintensity.duration.t.w.s | blastduration.t.w.s | 240 | 0.42 | 60.59 | < .001 | 0.03 |
blastintensity.duration.t.w.s | condition_dum:KIMS.t.w.s | 240 | -0.00 | -0.60 | .550 | 0.00 |
blastintensity.duration.t.w.s | condition_dum:BSCS.t.w.s | 240 | 0.01 | 0.75 | .455 | 0.00 |
blastintensity.duration.t.w.s | condition_dum:BAQ.t.w.s | 240 | 0.00 | 0.28 | .782 | 0.00 |
blastintensity.duration.t.w.s | condition_dum:SOPT.t.w.s | 240 | -0.00 | -0.65 | .520 | 0.00 |
blastintensity.duration.t.w.s | condition_dum:IAT.t.w.s | 240 | 0.01 | 0.86 | .388 | 0.00 |
Interpretation: The condition by trait self-control (Brief Self-Control Scale, BSCS) interaction comes up for all outcome variables, suggesting it is somewhat reliable.
Let’s plot the main significant interaction(s).
interact_plot(big.mod1, pred = "condition_dum", modx = "BSCS.t.w.s",
modxvals = NULL, interval = TRUE, x.label = "Condition",
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")
interact_plot(big.mod2, pred = "condition_dum", modx = "BSCS.t.w.s",
modxvals = NULL, interval = TRUE, x.label = "Condition",
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")
interact_plot(big.mod3, pred = "condition_dum", modx = "BSCS.t.w.s",
modxvals = NULL, interval = TRUE, x.label = "Condition",
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")
interact_plot(big.mod4, pred = "condition_dum", modx = "BSCS.t.w.s",
modxvals = NULL, interval = TRUE, x.label = "Condition",
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")
interact_plot(big.mod5, pred = "condition_dum", modx = "BSCS.t.w.s",
modxvals = NULL, interval = TRUE, x.label = "Condition",
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")
interact_plot(big.mod6, pred = "condition_dum", modx = "BSCS.t.w.s",
modxvals = NULL, x.label = "Condition", interval = TRUE,
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")
Interpretation: The interaction pattern is essentially the same across models. Counterintuitively, for people low in self-control, the mindfulness priming condition relates to lower aggression relative to the control condition, whereas for people high in self-control, it relates to higher aggression.
Let’s look at the simple slopes now (only for the significant interaction).
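Conceptually, the ±1 SD simple-slopes logic behind `nice_lm_slopes` amounts to re-centering the moderator: shifting it by +1 SD (or −1 SD) makes the condition coefficient the simple slope at that moderator level. A base-R sketch on simulated data (variable names here are generic, not the study variables):

```r
# Simulate a condition-by-moderator interaction and recover the HIGH
# (+1 SD) simple slope by re-centering the moderator.
set.seed(123)
n <- 5000
moderator <- rnorm(n)                  # standardized moderator (like BSCS)
cond <- rbinom(n, 1, 0.5)              # dummy-coded condition
y <- 0.2 * cond + 0.45 * cond * moderator + rnorm(n)

# Slope of condition at +1 SD: shift the moderator down by 1, so that
# moderator = 1 becomes the new zero point.
fit_high <- lm(y ~ cond * I(moderator - 1))
coef(fit_high)["cond"]  # approximately 0.2 + 0.45 = 0.65
```

The same trick with `I(moderator + 1)` yields the LOW (−1 SD) slope.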
big.mod1 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.t.w.s",
b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor (+/-1 SD) | df | β | t | p | sr2 |
blastintensity.t.w.s | condition_dum (LOW-BSCS.t.w.s) | 242 | -0.24 | -1.25 | .213 | 0.01 |
blastintensity.t.w.s | condition_dum (MEAN-BSCS.t.w.s) | 242 | 0.21 | 1.79 | .075 | 0.01 |
blastintensity.t.w.s | condition_dum (HIGH-BSCS.t.w.s) | 242 | 0.67 | 3.49 | .001 | 0.04 |
big.mod2 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.t.w.s",
b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor (+/-1 SD) | df | β | t | p | sr2 |
blastduration.t.w.s | condition_dum (LOW-BSCS.t.w.s) | 242 | -0.41 | -2.15 | .033 | 0.02 |
blastduration.t.w.s | condition_dum (MEAN-BSCS.t.w.s) | 242 | 0.10 | 0.83 | .408 | 0.00 |
blastduration.t.w.s | condition_dum (HIGH-BSCS.t.w.s) | 242 | 0.60 | 3.19 | .002 | 0.03 |
big.mod3 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.t.w.s",
b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor (+/-1 SD) | df | β | t | p | sr2 |
blastintensity.duration.t.w.s | condition_dum (LOW-BSCS.t.w.s) | 242 | -0.32 | -1.68 | .094 | 0.01 |
blastintensity.duration.t.w.s | condition_dum (MEAN-BSCS.t.w.s) | 242 | 0.17 | 1.43 | .154 | 0.01 |
blastintensity.duration.t.w.s | condition_dum (HIGH-BSCS.t.w.s) | 242 | 0.66 | 3.48 | .001 | 0.04 |
big.mod4 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.t.w.s",
b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor (+/-1 SD) | df | β | t | p | sr2 |
blastintensity.first.t.w.s | condition_dum (LOW-BSCS.t.w.s) | 242 | -0.31 | -1.56 | .121 | 0.01 |
blastintensity.first.t.w.s | condition_dum (MEAN-BSCS.t.w.s) | 242 | 0.02 | 0.14 | .888 | 0.00 |
blastintensity.first.t.w.s | condition_dum (HIGH-BSCS.t.w.s) | 242 | 0.34 | 1.74 | .083 | 0.01 |
big.mod5 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.t.w.s",
b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor (+/-1 SD) | df | β | t | p | sr2 |
blastduration.first.t.w.s | condition_dum (LOW-BSCS.t.w.s) | 242 | -0.34 | -1.73 | .086 | 0.01 |
blastduration.first.t.w.s | condition_dum (MEAN-BSCS.t.w.s) | 242 | 0.00 | 0.01 | .989 | 0.00 |
blastduration.first.t.w.s | condition_dum (HIGH-BSCS.t.w.s) | 242 | 0.35 | 1.75 | .082 | 0.01 |
big.mod6 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.t.w.s",
b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor (+/-1 SD) | df | β | t | p | sr2 |
blastintensity.duration.first.t.w.s | condition_dum (LOW-BSCS.t.w.s) | 242 | -0.36 | -1.83 | .069 | 0.01 |
blastintensity.duration.first.t.w.s | condition_dum (MEAN-BSCS.t.w.s) | 242 | 0.01 | 0.09 | .925 | 0.00 |
blastintensity.duration.first.t.w.s | condition_dum (HIGH-BSCS.t.w.s) | 242 | 0.38 | 1.95 | .052 | 0.01 |
big.mod7 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.t.w.s",
b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor (+/-1 SD) | df | β | t | p | sr2 |
all50trials.t.w.s | condition_dum (LOW-BSCS.t.w.s) | 242 | -0.29 | -1.54 | .124 | 0.01 |
all50trials.t.w.s | condition_dum (MEAN-BSCS.t.w.s) | 242 | 0.16 | 1.36 | .174 | 0.01 |
all50trials.t.w.s | condition_dum (HIGH-BSCS.t.w.s) | 242 | 0.62 | 3.25 | .001 | 0.04 |
big.mod8 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.t.w.s",
b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor (+/-1 SD) | df | β | t | p | sr2 |
taylor_hugues.t.w.s | condition_dum (LOW-BSCS.t.w.s) | 242 | -0.35 | -1.76 | .080 | 0.01 |
taylor_hugues.t.w.s | condition_dum (MEAN-BSCS.t.w.s) | 242 | 0.02 | 0.12 | .901 | 0.00 |
taylor_hugues.t.w.s | condition_dum (HIGH-BSCS.t.w.s) | 242 | 0.38 | 1.92 | .056 | 0.01 |
Interpretation: The effect of priming mindfulness on blast aggression is significant mainly for people high in self-control (for blast duration, the slope for people low in self-control is also significant, in the opposite direction).
Based on the results, it seems that the interaction of interest comes up for all six measures of blast aggression (intensity, duration, the combination of the two, and the first blast of each type), suggesting it is reliable.
The effect sizes are slightly smaller for the first-blast measures (sr2 = .02) than for average intensity or duration (sr2 = .03 to .04) or the intensity * duration combination (sr2 = .04).
Therefore, based on the marginally larger effect size, it may make sense to use the intensity * duration combination in future studies. My intuition is also that the effect is more reliable for reactive aggression (all trials) than for proactive aggression (first-blast measures only). Another reason to avoid using only the first trials is that they lead to distributional problems (i.e., they are not normal and are difficult to transform to normality).
There is another reason why using the product of intensity and duration might make more theoretical sense. Technically speaking, if a participant sets the intensity to 10 but the duration to 0, there is no actual sound blast for that trial; similarly, if the duration is 10 but the intensity is 0, no blast occurs. Taking the product of intensity and duration accounts for this dynamic, whereas the sum of intensity and duration, or the average of all 50 trials (intensity and duration), does not. Averaging the trial with intensity = 10 and duration = 0 would yield 5, whereas it should be 0 because no sound was actually administered on that trial.
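The zero-blast rationale above can be shown with a three-trial toy example:

```r
# Worked illustration: trials with intensity 10 / duration 0 (or vice versa)
# deliver no actual blast, so their score should be 0. The product captures
# this; a sum-based average does not.
intensity <- c(10, 5, 0)
duration  <- c(0, 5, 10)
(intensity + duration) / 2  # average: 5 5 5  -- scores all three trials alike
intensity * duration        # product: 0 25 0 -- only the delivered blast counts
```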
For the preregistration, I would suggest committing to the average product of intensity and duration across all trials, optimally transformed to normality via the bestNormalize package, as suggested above, and using the same design.
This strategy is similar to that of researchers who use the product of intensity and the log of duration, so the following papers could be cited (next tab).
Volume x Duration, average of all trials (25)
Arriaga, P., Monteiro, M. B., & Esteves, F. (2011). Effects of playing violent computer games on emotional desensitization and aggressive behavior. Journal of Applied Social Psychology, 41(8), 1900-1925. https://doi.org/10.1111/j.1559-1816.2011.00791.x
Participants’ ratings of the two indexes of noise (i.e., intensity, duration) that were administered to the opponent were multiplied to compute a global measure of aggressive behavior (see Bartholow et al., 2005).
Volume x Duration, multiplied averages of all trials (25)
Bartholow, B. D., Sestir, M. A., & Davis, E. B. (2005). Correlates and consequences of exposure to video game violence: Hostile personality, empathy, and aggressive behavior. Personality and Social Psychology Bulletin, 31(11), 1573-1586. https://doi.org/10.1177/0146167205277205
In this study, average noise intensity and duration settings were multiplied to form a composite aggressive behavior score.
Volume x log(Duration), average of all trials (25) / linear/quadratic contrasts across all trials (25)
Lindsay, J. J. & Anderson, C. A. (2000). From antecedent conditions to violent actions: A general affective aggression model. Personality and Social Psychology Bulletin, 26(5), 533-547. https://doi.org/10.1177/0146167200267002
The duration settings (hold times) were found to be positively skewed. We reduced this skew by subjecting the duration settings to a log transformation. For each participant, we multiplied the transformed duration setting by the intensity setting for each trial separately.
Volume x √Duration, average of all trials (25)
Carnagey, N. L. & Anderson, C. A. (2005). The effects of reward and punishment in violent video games on aggressive affect, cognition, and behavior. Psychological Science, 16(11), 882-889. https://doi.org/10.1111/j.1467-9280.2005.01632.x
An aggressive-energy score was calculated for each trial by taking the square root of the duration of noise chosen for the opponent and multiplying this value by the intensity of the noise chosen.
One question that arises is whether we should include any variables other than the BSCS, since it is the only variable of interest yielding an interesting effect. When defining the sample size for the preregistration, we obtained an sr2 of .04 for the primary model (intensity * duration), which is relatively large for this kind of effect. However, that is for the model that controls for many other variables and interactions. When using only the BSCS in the model, the effect is smaller (.03), which changes the required sample size for the power analysis.
However, even if we were to include those variables, perhaps we need not include them as interaction terms, but simply as covariates. Let’s test this below by redefining model 3 with all other variables as covariates.
Edit: after some exploration, the most parsimonious model seems to be one that controls not only for KIMS, but also for its interaction with the condition.
big.mod3 <- lm(blastintensity.duration.t.w.s ~
condition_dum*KIMS.t.w.s +
condition_dum*BSCS.t.w.s,
data = data, na.action="na.exclude")
big.mod3 %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
blastintensity.duration.t.w.s | condition_dum | 248 | 0.14 | 1.10 | .272 | 0.00 |
blastintensity.duration.t.w.s | KIMS.t.w.s | 248 | 0.06 | 0.55 | .582 | 0.00 |
blastintensity.duration.t.w.s | BSCS.t.w.s | 248 | -0.22 | -2.39 | .018 | 0.02 |
blastintensity.duration.t.w.s | condition_dum:KIMS.t.w.s | 248 | -0.23 | -1.63 | .104 | 0.01 |
blastintensity.duration.t.w.s | condition_dum:BSCS.t.w.s | 248 | 0.47 | 3.26 | .001 | 0.04 |
Then we still get sr2 = .04, without having to worry about administering the SOPT, IAT, BAQ, etc.
So the first study reports the large models with all variables, and the second study will include only the BSCS, along with the KIMS.
(Checking three-way interaction again just in case:)
big.mod <- lm(blastintensity.duration.t.w.s ~
condition_dum*KIMS.t.w.s +
condition_dum*BSCS.t.w.s*BAQ.t.w.s,
data = data, na.action = "na.exclude")
big.mod %>%
nice_lm(b.label = "B") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | β | t | p | sr2 |
blastintensity.duration.t.w.s | condition_dum | 244 | 0.12 | 0.87 | .387 | 0.00 |
blastintensity.duration.t.w.s | KIMS.t.w.s | 244 | 0.05 | 0.52 | .604 | 0.00 |
blastintensity.duration.t.w.s | BSCS.t.w.s | 244 | -0.12 | -1.12 | .262 | 0.00 |
blastintensity.duration.t.w.s | BAQ.t.w.s | 244 | 0.21 | 2.16 | .032 | 0.02 |
blastintensity.duration.t.w.s | condition_dum:KIMS.t.w.s | 244 | -0.21 | -1.52 | .131 | 0.01 |
blastintensity.duration.t.w.s | condition_dum:BSCS.t.w.s | 244 | 0.40 | 2.56 | .011 | 0.02 |
blastintensity.duration.t.w.s | condition_dum:BAQ.t.w.s | 244 | -0.05 | -0.34 | .731 | 0.00 |
blastintensity.duration.t.w.s | BSCS.t.w.s:BAQ.t.w.s | 244 | -0.06 | -0.84 | .404 | 0.00 |
blastintensity.duration.t.w.s | condition_dum:BSCS.t.w.s:BAQ.t.w.s | 244 | -0.11 | -0.94 | .348 | 0.00 |
report::cite_packages(sessionInfo())